Cisco UCS Direct-Attached SAN
New in Cisco UCS firmware 2.1 is the ability to directly attach a Fibre Channel SAN to the Fabric Interconnects; to see all the other cool features, head on over to UCS Guru’s blog. A few years ago, Cisco supported this very briefly before changing their minds, because your sole method of governing access to the SAN was LUN masking rather than zoning. In firmware 2.1, Cisco flipped the switch and added support for zoning on the Fabric Interconnects, allowing FC storage arrays to be directly connected to the FIs. I set this up for a customer of ours and it’s not really that intuitive, so read on!
Set FC Switch Mode
First things first, you need to set the Fabric Interconnects into FC Switch Mode (this will require a reboot). In UCSM, navigate to the Equipment tab > Fabric Interconnect A/B > General tab > Actions > Set Fibre Channel Switching Mode. Now repeat for the other FI.
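If you’d rather do this from the UCSM CLI, the equivalent is roughly the following. This is a sketch from memory of the fc-uplink scope, so double-check the syntax against the UCS Manager CLI Configuration Guide for your firmware before relying on it:
UCS-A# scope fc-uplink
UCS-A /fc-uplink # set mode switch
UCS-A /fc-uplink* # commit-buffer
As noted above, expect a reboot once the change is committed.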
Create VSANs for Zoning (one per FI)
If the FIs were previously connected to SAN switches (i.e. your zoning lived on an upstream SAN switch while the storage array was connected to the FIs), disconnect them and run the clear-unmanaged-fc-zone-all command on each VSAN on each FI. In UCSM, navigate to the SAN tab > SAN Cloud > VSANs > Add and enter the following (making sure to select FC Zoning Enabled):
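Before moving on, it doesn’t hurt to confirm from the NX-OS context that nothing pushed down from the old upstream switches is still in the active zoneset. The VSAN IDs here (100 on FI A, 200 on FI B) are just the example values that show up later in this post:
UCS-A# connect nxos
UCS-A(nxos)# show zoneset active vsan 100
UCS-A(nxos)# show zone status vsan 100
An empty (or absent) active zoneset means you’re starting clean; show zone status also reports the default-zone policy and zoning mode for the VSAN.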
Designate Storage Ports
In UCSM, navigate to the Equipment tab > Fabric Interconnect A/B > Configure Unified Ports. Use the slider to configure the ports connecting to the storage array as FC:
Repeat for the other Fabric Interconnect and wait for each to reboot. In UCSM, navigate to the SAN tab > Storage Cloud and create the two VSANs as before (these are specific to the storage ports):
Select each of the FC Ports on the Equipment tab and assign the Storage Cloud VSAN to them:
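With the ports converted and assigned, a quick check from the NX-OS context confirms they came up and landed in the right VSAN. The port numbers (fc1/31-32) and VSAN IDs below match the example output later in this post:
UCS-A# connect nxos
UCS-A(nxos)# show interface brief
UCS-A(nxos)# show vsan membership
The storage-facing ports should come up as F ports with the Storage Cloud VSAN listed against them.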
Create Storage Connection Policies (one per FI)
In UCSM, navigate to the SAN tab > Policies > right-click Storage Connection Policies > Create. For the new policy, enter the…
Name
Zoning = Single Initiator/Single Target
Add FC Target Endpoints… (two endpoints per FI)
In FC Target Endpoints, enter the…
WWPN of one of the array HBAs
Description
Path (select FI A or B)
VSAN (created earlier)
Repeat for other target endpoints
Repeat for other Storage Connection Policies (i.e. other FI).
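As a concrete illustration using the array WWPNs that appear in the flogi output later in this post, the two policies end up looking something like this (the policy names are made up; the A-side targets sit in VSAN 100 and the B-side targets in VSAN 200):
Storage-Conn-A  (Single Initiator/Single Target)
  50:06:01:60:3e:e0:16:fc   Path A   VSAN 100
  50:06:01:68:3e:e0:16:fc   Path A   VSAN 100
Storage-Conn-B  (Single Initiator/Single Target)
  50:06:01:61:3e:e0:16:fc   Path B   VSAN 200
  50:06:01:69:3e:e0:16:fc   Path B   VSAN 200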
Create vHBA Initiator Groups (one per vHBA)
In UCSM, navigate to SAN tab > Policies > SAN Connectivity Policies > select the policy in question > vHBA Initiator Groups tab. Click Add > enter the…
Select the vHBA
Name
Description
Storage Connection Policy (created earlier)
Repeat for the other vHBA. Word to the wise (because this tripped me up): don’t do this in the SP Template Wizard if you’re using a SAN Connectivity Policy.
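For reference, judging from the verification output below, each initiator group ends up as a set of single-initiator/single-target zones named after the Service Profile and vHBA, one zone per target endpoint in the Storage Connection Policy. For example:
zone name ucs_UCS_A_2_HS-ESX01_vhba1 vsan 100
  pwwn 20:00:00:25:b5:aa:00:8f   (vhba1 on HS-ESX01)
  pwwn 50:06:01:68:3e:e0:16:fc   (array target port)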
Verify Zoning/Flogi
At this point, you’ve associated the SAN Connectivity Policy with your Service Profile Template, associated your newly created Service Profiles with your blades, and they’ve booted (and thus logged into the SAN). To verify that the zoning is correct, you can use the command line in the NX-OS context of the Fabric Interconnects. Coming from zoning on MDS and Nexus switches, this was a VERY welcome aid in troubleshooting (for instance, I had a typo in the storage array’s WWPNs, so my blades couldn’t see their boot LUNs).
UCS-A# connect nxos
UCS-A(nxos)# show flogi database
--------------------------------------------------------------------------------
INTERFACE  VSAN    FCID        PORT NAME                NODE NAME
--------------------------------------------------------------------------------
fc1/31     100     0xe100ef    50:06:01:60:3e:e0:16:fc  50:06:01:60:be:e0:16:fc
fc1/32     100     0xe101ef    50:06:01:68:3e:e0:16:fc  50:06:01:60:be:e0:16:fc
vfc707     100     0xe10000    20:00:00:25:b5:aa:00:8f  20:00:00:25:b5:00:00:8f

Total number of flogi = 3.
UCS-A(nxos)# show zoneset active
zoneset name ucs-UCS-vsan-100-zoneset vsan 100
  zone name ucs_UCS_A_2_HS-ESX01_vhba1 vsan 100
  * fcid 0xe10000 [pwwn 20:00:00:25:b5:aa:00:8f]
  * fcid 0xe101ef [pwwn 50:06:01:68:3e:e0:16:fc]

  zone name ucs_UCS_A_1_HS-ESX02_vhba1 vsan 100
    pwwn 20:00:00:25:b5:aa:00:9f
  * fcid 0xe101ef [pwwn 50:06:01:68:3e:e0:16:fc]

  zone name ucs_UCS_A_6_HS-ESX01_vhba1 vsan 100
  * fcid 0xe10000 [pwwn 20:00:00:25:b5:aa:00:8f]
  * fcid 0xe100ef [pwwn 50:06:01:60:3e:e0:16:fc]

  zone name ucs_UCS_A_5_HS-ESX02_vhba1 vsan 100
    pwwn 20:00:00:25:b5:aa:00:9f
  * fcid 0xe100ef [pwwn 50:06:01:60:3e:e0:16:fc]

UCS-B(nxos)# show flogi database
--------------------------------------------------------------------------------
INTERFACE  VSAN    FCID        PORT NAME                NODE NAME
--------------------------------------------------------------------------------
fc1/31     200     0x2c00ef    50:06:01:61:3e:e0:16:fc  50:06:01:60:be:e0:16:fc
fc1/32     200     0x2c01ef    50:06:01:69:3e:e0:16:fc  50:06:01:60:be:e0:16:fc
vfc708     200     0x2c0000    20:00:00:25:b5:bb:00:8f  20:00:00:25:b5:00:00:8f

Total number of flogi = 3.

UCS-B(nxos)# sh zoneset active
zoneset name ucs-UCS-vsan-200-zoneset vsan 200
  zone name ucs_UCS_B_8_HS-ESX01_vhba2 vsan 200
  * fcid 0x2c0000 [pwwn 20:00:00:25:b5:bb:00:8f]
  * fcid 0x2c01ef [pwwn 50:06:01:69:3e:e0:16:fc]

  zone name ucs_UCS_B_7_HS-ESX02_vhba2 vsan 200
    pwwn 20:00:00:25:b5:bb:00:9f
  * fcid 0x2c01ef [pwwn 50:06:01:69:3e:e0:16:fc]

  zone name ucs_UCS_B_6_HS-ESX01_vhba2 vsan 200
  * fcid 0x2c0000 [pwwn 20:00:00:25:b5:bb:00:8f]
  * fcid 0x2c00ef [pwwn 50:06:01:61:3e:e0:16:fc]

  zone name ucs_UCS_B_5_HS-ESX02_vhba2 vsan 200
    pwwn 20:00:00:25:b5:bb:00:9f
  * fcid 0x2c00ef [pwwn 50:06:01:61:3e:e0:16:fc]
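One more NX-OS check that comes in handy when a zone looks right but a host still can’t see its LUNs: the name server database lists every device that registered in the VSAN, along with its FC4 type (initiator or target).
UCS-A(nxos)# show fcns database vsan 100
If a WWPN is missing here, the device never completed its fabric login (think cabling, port, or VSAN), whereas a WWPN that is present but absent from the active zoneset points back at the zoning config, like the typo mentioned above.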
You can also utilize UCS Manager to verify the zoning:
On a final note, I had to mount the ESXi ISO to the KVM session to force the flogi. It could’ve been a fluke, but maybe that’ll help you too. Once the vHBAs had logged in, I was able to see them in Unisphere and register them, create the boot LUNs, set up the Storage Groups, etc.:
Conclusion
For smaller environments, those that don’t require any FC access to a storage array outside of the UCS, this is a really cool solution. Many other times we’ve had to quote SAN switches just to zone the UCS environment to the SAN. To preempt any questions on whether you can still zone outside host HBAs (i.e. rack-mount servers with HBAs cabled directly into the Fabric Interconnect FC ports), I don’t believe you can. I haven’t heard anything definitive from the Cisco SEs that I work with, but there is no way to zone outside of vHBA Initiator Groups (which are tied to Service Profiles), and the FC ports on the Fabric Interconnects can only be FC Uplink (i.e. to SAN switches, if using them) or FC Storage (i.e. to a storage array serving up the targets referenced by the Storage Connection Policy). This solution isn’t a replacement for an FC switch when you need one, but for environments where only the UCS needs FC access, it’s pretty cool!